
Implicit Neural Field Reconstruction on Complex Shapes from Scattered And Noisy Data
In many engineering and medical applications, the physical fields under investigation and the domain shapes are measured through scattered, noisy data collected from local sensors. In this context, the reconstruction of distributed quantities of interest depends strictly on the reconstruction of the domain geometry, which serves as a fundamental constraint. Popular mesh-less approaches tackle the geometric reconstruction problem by describing a shape as the zero-level set of a continuous function (a Signed or Unsigned Distance Function), approximated by a neural network that is typically trained in a supervised fashion with pre-computed SDFs. While effective in certain medical applications, this approach is not viable when only scattered and noisy data points are available. We propose a novel method that trains a neural network to learn the implicit representation of a cardiac geometry starting only from a realistic dataset composed of noisy surface-level measurements, similar to those obtained via catheter sensors. We design a tailored loss function that combines fit and regularization terms, including a differential term based on the eikonal equation, so that the method generalizes well when data are scarce and corrupted by noise. Building on this shape model, we leverage neural networks to predict distributed quantities on the surface, taking the underlying geometry into account in the approximation. High accuracy and geometric compatibility are guaranteed by combining supervised training with a comparison between the reconstructed phenomena and derived quantities, such as the surface gradient, which can be obtained via automatic differentiation. We validate the effectiveness of this method through comprehensive testing on both synthetic and realistic datasets, demonstrating strong performance in reconstructing geometries and physical phenomena.
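
To illustrate the kind of loss structure described above, the following is a minimal sketch of an implicit field trained on noisy surface samples with a fit term plus an eikonal regularizer, with gradients obtained via automatic differentiation. It assumes a plain PyTorch MLP; the names (`ImplicitField`, `loss_fn`), the sampling of domain points, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: eikonal-regularized implicit field fit on noisy surface points.
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Small MLP mapping 3D points to a scalar signed-distance-like value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def loss_fn(model, surface_pts, domain_pts, lam=0.1):
    # Fit term: noisy surface measurements should lie on the zero-level set.
    fit = model(surface_pts).abs().mean()

    # Eikonal term: enforce |grad f| = 1 on points sampled in the domain,
    # with the gradient computed by automatic differentiation.
    domain_pts = domain_pts.clone().requires_grad_(True)
    f = model(domain_pts)
    grad = torch.autograd.grad(f.sum(), domain_pts, create_graph=True)[0]
    eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    return fit + lam * eikonal

# Usage: one optimization step on random stand-in data.
model = ImplicitField()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
surface_pts = torch.randn(256, 3)        # placeholder for catheter-like samples
domain_pts = torch.rand(512, 3) * 2 - 1  # placeholder domain samples
opt.zero_grad()
loss = loss_fn(model, surface_pts, domain_pts)
loss.backward()
opt.step()
```

The same autodiff mechanism used for the eikonal term also yields the surface gradient of the learned field, which is the kind of derived quantity the abstract mentions for checking geometric compatibility of the reconstructed phenomena.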